
    Relax-and-fix heuristics applied to a real-world lot-sizing and scheduling problem in the personal care consumer goods industry

    This paper addresses an integrated lot-sizing and scheduling problem in the personal care consumer goods industry, a highly competitive market in which customer service level and cost management are decisive in the competition for clients. The study considers a complex operational environment composed of unrelated parallel machines with limited production capacity and sequence-dependent setup times and costs. There is also a limited finished-goods storage capacity, a characteristic not previously addressed in the literature. Backordering is allowed but highly undesirable. The problem is formulated as a mixed integer linear program. Since the problem is NP-hard, relax-and-fix heuristics with hybrid partitioning strategies are investigated. Computational experiments with randomly generated and real-world instances are presented; the results show the efficacy and efficiency of the proposed approaches. Compared with the solutions currently used by the company, the best strategies yield substantially lower costs, primarily through reduced inventory levels and better allocation of production batches to machines.
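
    The core relax-and-fix idea is easy to illustrate. Below is a minimal sketch on a toy single-machine lot-sizing model, assuming the open-source PuLP modeling library; all data and the window size are illustrative, not taken from the paper, whose model also covers parallel machines and sequence-dependent setups. Each pass keeps one time window of setup binaries integer, relaxes later windows, and fixes the windows already decided.

```python
# A minimal relax-and-fix sketch on a toy single-machine lot-sizing model,
# assuming the PuLP modeling library. All data (demand, setup_cost, hold_cost,
# cap) and the window size are illustrative, not taken from the paper.
import pulp

T = 6                                  # planning periods
demand = [40, 60, 30, 70, 50, 20]      # hypothetical demand per period
setup_cost, hold_cost, cap = 100.0, 2.0, 120.0

prob = pulp.LpProblem("lot_sizing", pulp.LpMinimize)
x = [pulp.LpVariable(f"x_{t}", lowBound=0) for t in range(T)]    # production
s = [pulp.LpVariable(f"s_{t}", lowBound=0) for t in range(T)]    # inventory
y = [pulp.LpVariable(f"y_{t}", cat="Binary") for t in range(T)]  # setup

prob += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in range(T))
for t in range(T):
    prev = s[t - 1] if t > 0 else 0
    prob += prev + x[t] - s[t] == demand[t]   # inventory balance
    prob += x[t] <= cap * y[t]                # capacity and setup forcing

# Relax-and-fix over time windows: binaries in the current window stay
# integer, later ones are relaxed to [0, 1], earlier ones are fixed.
# (PuLP stores binaries as integer variables with bounds 0..1, so toggling
# the cat attribute between LpContinuous and LpInteger relaxes/restores them.)
window = 2
for start in range(0, T, window):
    for t in range(T):
        y[t].cat = pulp.LpContinuous if t >= start + window else pulp.LpInteger
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for t in range(start, min(start + window, T)):
        y[t].lowBound = y[t].upBound = round(y[t].value())  # fix decided setups

print("heuristic cost:", pulp.value(prob.objective))
```

    Hybrid partitioning strategies like those investigated in the paper would change how the variables are grouped into windows (by period, by machine, by product family, or combinations), while the solve-then-fix loop stays the same.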

    On complexity and convergence of high-order coordinate descent algorithms

    Coordinate descent methods with high-order regularized models for box-constrained minimization are introduced. High-order stationarity asymptotic convergence and first-order stationarity worst-case evaluation complexity bounds are established. The computer work necessary for obtaining first-order $\varepsilon$-stationarity with respect to the variables of each coordinate-descent block is $O(\varepsilon^{-(p+1)/p})$, whereas the computer work for obtaining first-order $\varepsilon$-stationarity with respect to all the variables simultaneously is $O(\varepsilon^{-(p+1)})$. Numerical examples involving multidimensional scaling problems are presented. The numerical performance of the methods is enhanced by means of coordinate-descent strategies for choosing initial points.
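
    For orientation, here is a sketch of the kind of block step such methods use, in notation assumed for this summary rather than taken verbatim from the paper: block $i$ of the iterate is updated by approximately minimizing a $p$-th order Taylor model of $f$ in that block plus a $(p+1)$-st power regularization, subject to the box,

    $$ s \in \arg\min_{s} \; \sum_{j=1}^{p} \frac{1}{j!}\, D^{j}_{(i)} f(x)[s]^{j} \;+\; \frac{\sigma}{p+1}\,\|s\|^{p+1} \quad \text{subject to} \quad \ell_{(i)} \le x_{(i)} + s \le u_{(i)}, $$

    where $D^{j}_{(i)} f(x)$ denotes the $j$-th derivative of $f$ with respect to the variables of block $i$ only. For $p = 1$ this reduces to a regularized projected-gradient-type step, and the per-block bound $O(\varepsilon^{-(p+1)/p})$ recovers the familiar first-order rate $O(\varepsilon^{-2})$.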

    Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models

    The evaluation complexity of general nonlinear, possibly nonconvex, constrained optimization is analyzed. It is shown that, under suitable smoothness conditions, an $\epsilon$-approximate first-order critical point of the problem can be computed in order $O(\epsilon^{1-2(p+1)/p})$ evaluations of the problem's functions and their first $p$ derivatives. This is achieved by using a two-phase algorithm inspired by Cartis, Gould, and Toint [SIAM J. Optim., 21 (2011), pp. 1721-1739; SIAM J. Optim., 23 (2013), pp. 1553-1574]. It is also shown that strong guarantees (in terms of handling degeneracies) on the possible limit points of the sequence of iterates generated by this algorithm can be obtained at the cost of increased complexity. At variance with previous results, $\epsilon$-approximate first-order criticality is defined by satisfying a version of the KKT conditions with an accuracy that does not depend on the size of the Lagrange multipliers.
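
    To make "unscaled" concrete, a sketch under assumed notation: for a problem $\min f(x)$ subject to $c(x) = 0$, an $\epsilon$-approximate first-order critical point in the unscaled sense is a pair $(x, \lambda)$ with

    $$ \| \nabla f(x) + J(x)^{T} \lambda \| \le \epsilon \quad\text{and}\quad \| c(x) \| \le \epsilon. $$

    Scaled variants instead require the first inequality only after division by a factor that grows with the multipliers (for example $1 + \|\lambda\|$), so the unscaled condition is the stronger requirement when the multipliers are large.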

    Guaranteed clustering and biclustering via semidefinite programming

    Identifying clusters of similar objects in data plays a significant role in a wide range of applications. As a model problem for clustering, we consider the densest k-disjoint-clique problem, whose goal is to identify the collection of k disjoint cliques of a given weighted complete graph maximizing the sum of the densities of the complete subgraphs induced by these cliques. In this paper, we establish conditions ensuring exact recovery of the densest k cliques of a given graph from the optimal solution of a particular semidefinite program. In particular, the semidefinite relaxation is exact for input graphs corresponding to data consisting of k large, distinct clusters and a smaller number of outliers. This approach also yields a semidefinite relaxation for the biclustering problem with similar recovery guarantees. Given a set of objects and a set of features exhibited by these objects, biclustering seeks to simultaneously group the objects and features according to their expression levels. This problem may be posed as partitioning the nodes of a weighted bipartite complete graph such that the sum of the densities of the resulting bipartite complete subgraphs is maximized. As in our analysis of the densest k-disjoint-clique problem, we show that the correct partition of the objects and features can be recovered from the optimal solution of a semidefinite program in the case that the given data consists of several disjoint sets of objects exhibiting similar features. Empirical evidence from numerical experiments supporting these theoretical guarantees is also provided.
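
    A hedged sketch of one common form of such a relaxation, in CVXPY syntax, is shown below; the exact constraint set used in the paper may differ in detail, and the planted-cluster data here is purely illustrative.

```python
# Sketch of a densest-k-disjoint-clique-style SDP relaxation, assuming the
# CVXPY library. The constraint set is one standard form; the paper's exact
# formulation may differ in detail.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, k = 12, 2
W = rng.uniform(0.0, 0.2, (n, n))    # weak background weights
W[:6, :6] = W[6:, 6:] = 0.9          # two planted clusters
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

X = cp.Variable((n, n), symmetric=True)
constraints = [
    X >> 0,                   # positive semidefinite
    X >= 0,                   # entrywise nonnegative
    cp.trace(X) == k,         # one unit of trace per clique
    cp.sum(X, axis=1) <= 1,   # row sums at most 1 (allows outliers)
]
prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), constraints)
prob.solve()

# In the exact-recovery regime, X is (near) block diagonal: X_ij is roughly
# 1/|C| for i, j in the same cluster C, and roughly 0 otherwise.
print(np.round(X.value, 2))
```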

    On Augmented Lagrangian Methods with General Lower-Level Constraints


    Filter-based DIRECT method for constrained global optimization

    This paper presents a DIRECT-type method that uses a filter methodology to assure convergence to a feasible and optimal solution of nonsmooth and nonconvex constrained global optimization problems. The filter methodology aims to give priority to the selection of hyperrectangles with feasible center points, followed by those with infeasible and non-dominated center points, and finally by those with infeasible and dominated center points. The convergence properties of the algorithm are analyzed. Preliminary numerical experiments show that the proposed filter-based DIRECT algorithm gives competitive results when compared with other DIRECT-type methods.
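
    The filter bookkeeping itself is compact. A minimal sketch follows; the function and variable names are illustrative, not the paper's. Each center point is scored by its objective value f and an aggregate constraint violation h, and a center is dominated if some other center is no worse in both scores and strictly better in one.

```python
# Minimal sketch of the filter-dominance bookkeeping described above;
# names are illustrative, not taken from the paper.
def violation(g_values):
    """Aggregate violation h(x) for constraints g_i(x) <= 0."""
    return sum(max(g, 0.0) for g in g_values)

def non_dominated(points):
    """Keep (f, h) pairs not dominated by another pair that is no worse in
    both coordinates and strictly better in at least one."""
    return [
        (f, h) for f, h in points
        if not any((f2 <= f and h2 <= h) and (f2 < f or h2 < h)
                   for f2, h2 in points)
    ]

# Each center point carries an objective value f and constraint values g(x).
centers = [(3.0, [-1.0, -0.5]), (2.5, [0.4, -2.0]),
           (4.0, [-0.2, 0.0]), (2.8, [0.1, -0.1]), (5.0, [2.0, 0.0])]
scored = [(f, violation(g)) for f, g in centers]

# Selection priority from the abstract: feasible centers first, then
# infeasible non-dominated centers, then infeasible dominated ones.
feasible = [p for p in scored if p[1] == 0.0]
infeasible = [p for p in scored if p[1] > 0.0]
print("feasible:", feasible)
print("non-dominated infeasible:", non_dominated(infeasible))
```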

    The inverse problem of determining the filtration function and permeability reduction in flow of water with particles in porous media

    Deep bed filtration of particle suspensions in porous media occurs during water injection into oil reservoirs, drilling fluid invasion of reservoir production zones, fines migration in oil fields, industrial filtering, and transport of bacteria, viruses, or contaminants in groundwater. The basic features of the process are particle capture by the porous medium and consequent permeability reduction. Models for deep bed filtration contain two quantities that represent rock and fluid properties: the filtration function, which is the fraction of particles captured per unit particle path length, and the formation damage function, which is the ratio between reduced and initial permeabilities. These quantities cannot be measured directly in the laboratory or in the field; therefore, they must be calculated indirectly by solving inverse problems. The practical petroleum and environmental engineering purpose is to predict injectivity loss and particle penetration depth around wells. Reliable prediction requires precise knowledge of these two coefficients. In this work we determine these quantities from pressure drop and effluent concentration histories measured in one-dimensional laboratory experiments. The recovery method consists of optimizing deviation functionals in appropriate subdomains; if necessary, a Tikhonov regularization term is added to the functional. The filtration function is recovered by optimizing a non-linear functional with box constraints; this functional involves the effluent concentration history. The permeability reduction is recovered likewise, taking into account the filtration function already found, with a functional involving the pressure drop history. In both cases, the functionals are derived from least squares formulations of the deviation between experimental data and quantities predicted by the model.
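
    The recovery step is, in essence, a box-constrained regularized least-squares fit. A minimal sketch assuming SciPy follows; `forward_model` is a hypothetical placeholder for the predicted effluent concentration history, which in the paper requires solving the deep bed filtration transport equations.

```python
# Hedged sketch of the regularized least-squares recovery; `forward_model`
# is a stand-in for the deep bed filtration model, not the paper's equations.
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, t):
    # Placeholder: a simple exponential breakthrough curve standing in for
    # the predicted effluent concentration history.
    lam, c0 = params
    return c0 * np.exp(-lam * t)

t = np.linspace(0.0, 10.0, 50)
data = forward_model([0.3, 1.0], t) \
    + 0.01 * np.random.default_rng(1).normal(size=t.size)   # synthetic data

alpha = 1e-3   # Tikhonov weight, added only if the fit is ill-conditioned

def residuals(params):
    misfit = forward_model(params, t) - data
    tikhonov = np.sqrt(alpha) * params   # standard-form regularization term
    return np.concatenate([misfit, tikhonov])

# Box constraints on the coefficients, as in the paper's formulation.
sol = least_squares(residuals, x0=[0.1, 0.5], bounds=([0.0, 0.0], [5.0, 2.0]))
print(sol.x)
```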

    Difficulties in encoding, relating, and categorizing algebraic word problems: two studies with secondary school students and pre-service teachers

    In word problem solving by transfer, the activation of previously known problems that can serve as a guide depends on the analogies perceived between those problems and the one to be solved. Two related studies were carried out to analyze which characteristics students rely on to encode problems and to detect their analogies in categorization (sorting) tasks, using combined quantitative and qualitative techniques. The first study analyzed how secondary school students are influenced by different characteristic variables of science problems; a large proportion of participants were unable to perceive the appropriate analogies and differences between problems. The second study sought to explain these results. Academic level and familiarity with the topics were significant factors, but the participating pre-service teachers showed considerable difficulty as well, suggesting that some common instructional assumptions should be revisited.